Inverse Gaussian distribution

Inverse Gaussian
Probability density function
Parameters  \lambda > 0
 \mu > 0
Support  x \in (0,\infty)
PDF  \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)
CDF  \Phi\left(\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu}-1 \right)\right) + \exp\left(\frac{2 \lambda}{\mu}\right) \Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1 \right)\right)

where \Phi(\cdot) is the cumulative distribution function of the standard normal (standard Gaussian) distribution

Mean  \mu
Mode  \mu\left[\left(1+\frac{9 \mu^2}{4 \lambda^2}\right)^{1/2}-\frac{3 \mu}{2 \lambda}\right]
Variance  \frac{\mu^3}{\lambda}
Skewness  3\left(\frac{\mu}{\lambda}\right)^{1/2}
Ex. kurtosis  \frac{15 \mu}{\lambda}
MGF  e^{\left(\frac{\lambda}{\mu}\right)\left[1-\sqrt{1-\frac{2\mu^2 t}{\lambda}}\right]}
CF  e^{\left(\frac{\lambda}{\mu}\right)\left[1-\sqrt{1-\frac{2\mu^2 \mathrm{i}t}{\lambda}}\right]}

In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞).

Its probability density function is given by

f(x;\mu,\lambda)
= \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)

for x > 0, where \mu > 0 is the mean and \lambda > 0 is the shape parameter.
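
For concreteness, the density can be evaluated directly from this expression. The following Java sketch (the method name is illustrative, not from any library) mirrors the formula term by term:

// A minimal sketch: direct evaluation of the density f(x; mu, lambda) above
// for mu > 0 and lambda > 0; returns 0 outside the support (0, infinity).
public static double inverseGaussianPdf(double x, double mu, double lambda) {
    if (x <= 0) {
        return 0.0;                      // support is (0, infinity)
    }
    double coeff = Math.sqrt(lambda / (2.0 * Math.PI * x * x * x));
    double exponent = -lambda * (x - mu) * (x - mu) / (2.0 * mu * mu * x);
    return coeff * Math.exp(exponent);
}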

As λ tends to infinity, the inverse Gaussian distribution becomes more like a normal (Gaussian) distribution, and it has several properties analogous to those of a Gaussian distribution. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes a Brownian motion's level at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level.

Its cumulant generating function (logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable.
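
Concretely, taking the logarithm of the moment generating function listed in the table above gives

K(t) = \log \operatorname{E}\left[e^{tX}\right] = \frac{\lambda}{\mu}\left[1-\sqrt{1-\frac{2\mu^2 t}{\lambda}}\right], \qquad t \le \frac{\lambda}{2\mu^2}.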

To indicate that a random variable X is inverse Gaussian-distributed with mean μ and shape parameter λ we write

X \sim IG(\mu, \lambda).

Properties

Summation

If Xi has an IG(\mu_0 w_i, \lambda_0 w_i^2) distribution for i = 1, 2, ..., n and all Xi are independent, then


S=\sum_{i=1}^n X_i
\sim
IG \left(  \mu_0 \sum w_i, \lambda_0 \left(\sum w_i \right)^2  \right).

Note that


\frac{\textrm{Var}(X_i)}{\textrm{E}(X_i)}= \frac{\mu_0^2 w_i^2 }{\lambda_0 w_i^2 }=\frac{\mu_0^2}{\lambda_0}

is constant for all i. This is a necessary condition for the summation; otherwise S would not be inverse Gaussian distributed.
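
For example, taking w_i = 1 for every i gives the i.i.d. case: the sum of n independent IG(\mu_0, \lambda_0) variables satisfies

S=\sum_{i=1}^n X_i \sim IG\left(n\mu_0, n^2\lambda_0\right).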

Scaling

For any t > 0 it holds that


X \sim IG(\mu,\lambda) \,\,\,\,\,\, \Rightarrow \,\,\,\,\,\, tX \sim IG(t\mu,t\lambda).

Exponential family

The inverse Gaussian distribution is a two-parameter exponential family with natural parameters -λ/(2μ²) and -λ/2, and natural statistics X and 1/X.
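
Explicitly, expanding the quadratic in the exponent of the density gives

f(x;\mu,\lambda) = \sqrt{\frac{\lambda}{2 \pi x^3}}\; e^{\lambda/\mu} \exp\left( -\frac{\lambda}{2\mu^2}\, x - \frac{\lambda}{2}\, \frac{1}{x} \right),

so the natural parameters -λ/(2μ²) and -λ/2 multiply the natural statistics x and 1/x, while the remaining factors depend only on x or only on (μ, λ).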

Relationship with Brownian motion

The stochastic process Xt given by

X_0 = 0
X_t = \nu t + \sigma W_t

(where Wt is a standard Brownian motion and \nu > 0) is a Brownian motion with drift ν.

Then, the first passage time for a fixed level \alpha > 0 by Xt is distributed according to an inverse Gaussian:

T_\alpha = \inf\{ 0 < t \mid X_t=\alpha \} \sim IG(\tfrac{\alpha}{\nu}, \tfrac{\alpha^2}{\sigma^2}).
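
This characterization can be checked numerically. The sketch below (class name, parameter values, grid step, and sample count are illustrative choices, and the Euler grid introduces a small discretization bias) simulates the drifted Brownian motion on a time grid and records the hitting time of the level α; the sample mean of those hitting times should be close to α/ν, the mean of IG(α/ν, α²/σ²).

import java.util.Random;

// Illustrative Monte Carlo check of the first-passage-time characterization:
// simulate X_t = nu*t + sigma*W_t on a time grid and record the first time X_t >= alpha.
public class FirstPassageDemo {
    public static void main(String[] args) {
        double nu = 1.0, sigma = 0.5, alpha = 2.0;   // drift, volatility, level (arbitrary choices)
        double dt = 1e-3;                            // grid step of the Euler discretization
        int samples = 5000;
        Random rand = new Random();

        double sum = 0.0;
        for (int i = 0; i < samples; i++) {
            double x = 0.0, t = 0.0;
            while (x < alpha) {
                x += nu * dt + sigma * Math.sqrt(dt) * rand.nextGaussian();
                t += dt;
            }
            sum += t;
        }
        System.out.println("sample mean of T_alpha   : " + sum / samples);
        System.out.println("theoretical mean alpha/nu: " + alpha / nu);
    }
}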

When drift is zero

A common special case of the above arises when the Brownian motion has no drift. In that case, parameter μ tends to infinity, and the first passage time for fixed level α has probability density function

 f \left( x; 0, \left(\frac{\alpha}{\sigma}\right)^2 \right)
= \frac{\alpha}{\sigma \sqrt{2 \pi x^3}} \exp\left(-\frac{\alpha^2 }{2 x \sigma^2}\right).

This is a Lévy distribution with parameter c=\frac{\alpha^2}{\sigma^2}.
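
This form can also be read off as a limit of the general density: as μ → ∞ with λ = α²/σ² held fixed,

\frac{-\lambda (x-\mu)^2}{2 \mu^2 x} = -\frac{\lambda}{2x}\left(\frac{x}{\mu}-1\right)^2 \longrightarrow -\frac{\lambda}{2x},

so f(x; \mu, \lambda) tends to \sqrt{\lambda/(2\pi x^3)}\,\exp(-\lambda/(2x)), which is the expression above.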

Maximum likelihood

The model where


X_i \sim IG(\mu,\lambda w_i), \,\,\,\,\,\, i=1,2,\ldots,n

with all wi known, (μ, λ) unknown and all Xi independent has the following likelihood function


L(\mu, \lambda)=
\left(      \frac{\lambda}{2\pi}   \right)^\frac n 2  
\left(      \prod^n_{i=1} \frac{w_i}{X_i^3}    \right)^{\frac{1}{2}} 
\exp\left(\frac{\lambda}{\mu}\sum_{i=1}^n w_i -\frac{\lambda}{2\mu^2}\sum_{i=1}^n w_i X_i - \frac\lambda 2 \sum_{i=1}^n w_i \frac1{X_i} \right).

Solving the likelihood equation yields the following maximum likelihood estimates


\hat{\mu}= \frac{\sum_{i=1}^n w_i X_i}{\sum_{i=1}^n w_i}, \,\,\,\,\,\,\,\, \frac{1}{\hat{\lambda}}= \frac{1}{n} \sum_{i=1}^n w_i \left( \frac{1}{X_i}-\frac{1}{\hat{\mu}} \right).

\hat{\mu} and \hat{\lambda} are independent and


\hat{\mu} \sim IG \left(\mu, \lambda \sum_{i=1}^n w_i \right)  \,\,\,\,\,\,\,\, \frac{n}{\hat{\lambda}} \sim \frac{1}{\lambda} \chi^2_{n-1}.
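
The estimates are straightforward to compute from data. The following Java sketch (the method name and array arguments are illustrative) evaluates μ̂ and λ̂ exactly as in the formulas above for observations x[] with known weights w[]:

// A minimal sketch: maximum likelihood estimates of mu and lambda for
// observations x[] with known weights w[] (arrays of equal length n).
public static double[] inverseGaussianMle(double[] x, double[] w) {
    int n = x.length;
    double sumW = 0.0, sumWX = 0.0;
    for (int i = 0; i < n; i++) {
        sumW += w[i];
        sumWX += w[i] * x[i];
    }
    double muHat = sumWX / sumW;                 // weighted mean of the observations

    double invLambdaHat = 0.0;
    for (int i = 0; i < n; i++) {
        invLambdaHat += w[i] * (1.0 / x[i] - 1.0 / muHat);
    }
    invLambdaHat /= n;                           // 1/lambdaHat as given above
    return new double[] { muHat, 1.0 / invLambdaHat };
}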

Generating random variates from an inverse-Gaussian distribution

The following algorithm may be used.[1]

Generate a random variate from a normal distribution with mean 0 and standard deviation 1


\displaystyle \nu = N(0,1).

Square the value


\displaystyle y = \nu^2

and use this relation


x = \mu + \frac{\mu^2 y}{2\lambda} - \frac{\mu}{2\lambda}\sqrt{4\mu \lambda y + \mu^2 y^2}.

Generate another random variate, this time sampled from a uniform distribution between 0 and 1


\displaystyle z = U(0,1).

If


z \le \frac{\mu}{\mu+x}

then return


\displaystyle
x

else return


\frac{\mu^2}{x}.

Sample code in Java:

import java.util.Random;

public class InverseGaussianSampler {

    private static final Random rand = new Random();

    public static double inverseGaussian(double mu, double lambda) {
        double v = rand.nextGaussian();   // sample from a normal distribution with mean 0 and standard deviation 1
        double y = v * v;
        double x = mu + (mu * mu * y) / (2 * lambda)
                 - (mu / (2 * lambda)) * Math.sqrt(4 * mu * lambda * y + mu * mu * y * y);
        double test = rand.nextDouble();  // sample from a uniform distribution between 0 and 1
        if (test <= mu / (mu + x)) {
            return x;
        }
        return (mu * mu) / x;
    }
}
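
For example, a main method added to the class above (parameter values and sample size are arbitrary) can check that the empirical mean and variance of many draws are close to μ and μ³/λ:

public static void main(String[] args) {
    double mu = 2.0, lambda = 3.0;
    int n = 1_000_000;
    double sum = 0.0, sumSq = 0.0;
    for (int i = 0; i < n; i++) {
        double x = inverseGaussian(mu, lambda);
        sum += x;
        sumSq += x * x;
    }
    double mean = sum / n;
    double variance = sumSq / n - mean * mean;
    System.out.println("sample mean     : " + mean + "  (expected " + mu + ")");
    System.out.println("sample variance : " + variance + "  (expected " + (mu * mu * mu / lambda) + ")");
}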

Notes

  1. Michael, John R.; Schucany, William R.; Haas, Roy W. (1976). "Generating Random Variates Using Transformations with Multiple Roots". The American Statistician 30 (2): 88–90.
